BART-large-MNLI

by Meta

Main use cases: A Transformer-based sequence-to-sequence language model (BART-large) fine-tuned on the MultiNLI natural language inference dataset. By framing candidate labels as NLI hypotheses, it can perform intent recognition and text classification without task-specific training examples (zero-shot).

Input length: 1,024 tokens (approx. 768 words)

Languages: predominantly English

Model size: ~407 million parameters
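The zero-shot use case described above can be sketched with the Hugging Face `transformers` zero-shot classification pipeline; the input text and candidate labels below are purely illustrative:

```python
from transformers import pipeline

# Load the zero-shot classification pipeline backed by BART-large-MNLI.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

# Illustrative input and candidate intents; no task-specific
# training examples are required.
text = "I would like to cancel my subscription immediately."
labels = ["cancellation", "billing question", "technical support"]

result = classifier(text, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring label comes first
```

Under the hood, each label is turned into a hypothesis such as "This example is about cancellation.", and the model's entailment score for that hypothesis becomes the label's score.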
